Subcycling method1 #4
Conversation
```diff
@@ -647,7 +679,7 @@ WarpX::AllocLevelData (int lev, const BoxArray& ba, const DistributionMapping& d
     Efield_cax[lev][1].reset( new MultiFab(amrex::convert(cba,Ey_nodal_flag),dm,1,ngE));
     Efield_cax[lev][2].reset( new MultiFab(amrex::convert(cba,Ez_nodal_flag),dm,1,ngE));

-    gather_buffer_masks[lev].reset( new iMultiFab(ba, dm, 1, 0) );
+    gather_buffer_masks[lev].reset( new iMultiFab(ba, dm, 1, 1) );
```
@WeiqunZhang Could you explain why you changed `0` into `1` here?
Because of subcycling, particles could end up in ghost cells after the first fine substep. Adding one extra ghost cell to the mask is cheap.
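For context, the last constructor argument of AMReX's `iMultiFab` is the number of ghost (guard) cells. A minimal sketch of what the change does (the function name is illustrative, not the actual WarpX allocation code):

```cpp
#include <memory>

#include <AMReX_BoxArray.H>
#include <AMReX_DistributionMapping.H>
#include <AMReX_iMultiFab.H>

// iMultiFab(ba, dm, ncomp, ngrow): the last argument is the number of
// ghost cells. Growing the mask from 0 to 1 ghost cell means particles
// that drift into a ghost cell during the first fine substep still see
// a valid mask entry when the buffers are evaluated.
void AllocMaskSketch (const amrex::BoxArray& ba,
                      const amrex::DistributionMapping& dm,
                      std::unique_ptr<amrex::iMultiFab>& mask)
{
    mask.reset( new amrex::iMultiFab(ba, dm, 1, 1) ); // was ngrow = 0
}
```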
```diff
@@ -657,7 +689,7 @@ WarpX::AllocLevelData (int lev, const BoxArray& ba, const DistributionMapping& d
     if (do_dive_cleaning) {
         charge_buf[lev].reset( new MultiFab(amrex::convert(cba,IntVect::TheUnitVector()),dm,2,ngRho));
     }
-    current_buffer_masks[lev].reset( new iMultiFab(ba, dm, 1, 0) );
+    current_buffer_masks[lev].reset( new iMultiFab(ba, dm, 1, 1) );
```
Same question here.
Same as above.
```diff
@@ -510,7 +510,7 @@ WarpX::SyncCurrent (const std::array<const amrex::MultiFab*,3>& fine,
                     int ref_ratio)
 {
     BL_ASSERT(ref_ratio == 2);
-    const IntVect& ng = fine[0]->nGrowVect()/ref_ratio;
+    const IntVect& ng = (fine[0]->nGrowVect() + 1)/ref_ratio;
```
@WeiqunZhang Could you explain why we need the `+ 1` here?
This is because integer division truncates: `2*(i/2) <= i`. Without the `+ 1`, an odd number of fine ghost cells would map to one coarse ghost cell too few to cover them.
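A quick standalone illustration of that rounding argument (hypothetical values, not WarpX code):

```cpp
#include <cassert>

int main ()
{
    const int ref_ratio = 2;
    for (int ng_fine = 0; ng_fine <= 7; ++ng_fine) {
        int trunc = ng_fine / ref_ratio;          // rounds down: 2*(i/2) <= i
        int round_up = (ng_fine + 1) / ref_ratio; // ceil(i/2) for ref_ratio == 2
        // With trunc, an odd ng_fine (e.g. 3/2 = 1, covering only 2 fine cells)
        // leaves one fine ghost cell uncovered; rounding up covers them all.
        assert(ref_ratio * round_up >= ng_fine);
        (void) trunc;
    }
    return 0;
}
```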
```cpp
{
    current_cp[lev][0]->setVal(0.0);
    current_cp[lev][1]->setVal(0.0);
    current_cp[lev][2]->setVal(0.0);
```
@WeiqunZhang Why are we setting the currents to 0 on the coarse grid here? Isn't `SyncCurrent` going to overwrite those currents anyway?
Ah, my guess is that `SyncCurrent` does not overwrite the guard cells, is that correct?
They do overwrite ghost cells, but not all of them. Note that cp and fp have the same number of ghost cells, but cp's cells are coarser, so its ghost region covers a larger physical extent.
```cpp
const auto& period = Geom(lev).periodicity();
current_fp[lev][0]->OverrideSync(period);
current_fp[lev][1]->OverrideSync(period);
current_fp[lev][2]->OverrideSync(period);
```
@WeiqunZhang Could you explain what `OverrideSync` does? My understanding is that this function is defined in AMReX, but I could not find documentation at https://amrex-codes.github.io/amrex/doxygen/classamrex_1_1_multi_fab.html#aec134620b7dfd17d0b8a540e9a96f41c
The data are nodal in some directions, so the same points can exist on several grids. Because of round-off errors (e.g. (a+b)+c != a+(b+c)), the values at those shared points are not exactly the same. A while ago, this caused an instability in a test, so I added these calls to make the data consistent: one grid overwrites the data on the other grids.
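A minimal demonstration of the round-off point (standard C++, nothing WarpX-specific):

```cpp
#include <cstdio>

int main ()
{
    // Floating-point addition is not associative, so the same nodal value
    // summed in a different order on two grids can differ in the last bits.
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    std::printf("(a+b)+c = %g,  a+(b+c) = %g\n", (a + b) + c, a + (b + c));
    // Prints: (a+b)+c = 1,  a+(b+c) = 0
    return 0;
}
```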
```cpp
WarpX::RestrictRhoFromFineToCoarsePatch (int lev)
{
    if (rho_fp[lev]) {
        rho_cp[lev]->setVal(0.0);
```
Same question: why are we setting the charge density to 0? Isn't `SyncRho` going to overwrite this anyway?
Same as above.
```cpp
void
WarpX::OneStep_sub1 (Real curtime)
{
    // TODO: we could save some charge depositions
```
@WeiqunZhang Could you elaborate on the above?
Every time we push particles, we compute the charge density at t and t+dt. Rho at t+dt could potentially be reused as the pre-push density for the next push.
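In other words, a hypothetical sketch of the saving the TODO refers to (names are illustrative, not actual WarpX code):

```cpp
#include <memory>
#include <utility>

#include <AMReX_MultiFab.H>

// Each push deposits rho at t and at t+dt. Since rho(t+dt) of one push
// equals rho(t) of the next, the first deposition of the next push could
// be skipped by recycling the previous result.
void AdvanceWithCachedRho (std::unique_ptr<amrex::MultiFab>& rho_old,
                           std::unique_ptr<amrex::MultiFab>& rho_new)
{
    std::swap(rho_old, rho_new);  // reuse previous rho(t+dt) as rho(t)
    // ... push particles over dt ...
    // ... deposit rho(t+dt) into *rho_new as usual ...
}
```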
So let's merge this PR?
This pull request contains subcycling method 1 and deposit on the main grid. Description edited by @RemiLehe:
Overview
This pull request implements subcycling (using the so-called method 1). With subcycling, the coarse patch and mother grid use a coarser timestep than the fine patch. In particular, this means that the field solver's timestep on the coarse patch / mother grid is closer to the Courant limit, and therefore numerical dispersion is reduced. This is the main motivation for this PR.
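To make the dispersion argument concrete, here is a sketch assuming a standard Yee-type CFL condition (the exact bound depends on the solver):

```latex
% Yee-type CFL limit for cubic cells of size \Delta x in d dimensions:
\[
  c\,\Delta t \;\le\; \frac{\Delta x}{\sqrt{d}}
\]
% With a refinement ratio of 2, \Delta x_c = 2\,\Delta x_f. Subcycling also
% sets \Delta t_c = 2\,\Delta t_f, so
\[
  \frac{c\,\Delta t_c}{\Delta x_c} \;=\; \frac{c\,\Delta t_f}{\Delta x_f},
\]
% i.e. the coarse level runs at the same fraction of its own Courant limit
% as the fine level. Without subcycling (\Delta t_c = \Delta t_f), the
% coarse level runs at only half that fraction, which increases numerical
% dispersion.
```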
Validation
Using the example script that was added in `Example/Tests/subcycling`, we get the following results with and without subcycling:

With subcycling:
![ex01000](https://user-images.githubusercontent.com/6685781/47119255-0c7ae580-d21f-11e8-96fe-eeffda1bb02c.png)
Without subcycling:
![ex02000](https://user-images.githubusercontent.com/6685781/47118631-b6a53e00-d21c-11e8-89ad-89d17b520e9b.png)
Thanks to the improved dispersion relation on the coarse patch and mother grid, the radiation at the patch boundary is reduced.
Implementation details
This version of subcycling only works for 2 levels and with a refinement ratio of 2.
The particles and fields of the fine patch are pushed twice (with dt[coarse]/2) during one PIC iteration.
The particles of the coarse patch and mother grid are pushed only once (with dt[coarse]). The fields on the coarse patch and mother grid are pushed in a way which is equivalent to a single push with a current that is the average of the coarse current and the fine current over the two fine-grid substeps. A schematic sketch of one coarse iteration is shown below.
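The following pseudocode summarizes one coarse PIC iteration under this scheme (the function names are illustrative stand-ins, not the actual WarpX routines; the real logic lives in `WarpX::OneStep_sub1`):

```cpp
// Illustrative stubs standing in for the actual WarpX operations.
void PushParticlesAndFields (int lev, double dt) { (void)lev; (void)dt; }
void PushParticles (int lev, double dt)          { (void)lev; (void)dt; }
void PushFields (int lev, double dt)             { (void)lev; (void)dt; }
void AverageAndRestrictFineCurrent ()            { /* average + restrict J */ }

// One coarse iteration of subcycling method 1, refinement ratio 2.
void OneStepSub1Sketch (double dt_coarse)
{
    const double dt_fine = dt_coarse / 2.0;

    // Fine patch: two substeps with dt_fine, depositing current each time.
    PushParticlesAndFields(1, dt_fine);  // first fine substep
    PushParticlesAndFields(1, dt_fine);  // second fine substep

    // Coarse patch / mother grid: particles are pushed once with dt_coarse.
    PushParticles(0, dt_coarse);

    // Fields are advanced once with dt_coarse, using the coarse current plus
    // the average of the fine current over the two substeps (restricted to
    // the coarse grid).
    AverageAndRestrictFineCurrent();
    PushFields(0, dt_coarse);
}
```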